%%javascript
require.config({
    paths: {
        d3: 'https://d3js.org/d3.v5.min'
    }
});
print('\n'.join(f'{m.__name__} {m.__version__}' for m in globals().values() if getattr(m, '__version__', None)))
numpy 1.18.1
pandas 1.0.4
scipy 1.4.1
seaborn 0.11.0
ipywidgets 7.4.2

1. Background

$$\newcommand{\kon}{k_{\mathrm{on}}} \newcommand{\koff}{k_{\mathrm{off}}} \newcommand{\kcat}{k_{\mathrm{cat}}} \newcommand{\kuncat}{k_{\mathrm{uncat}}} \newcommand{\kms}{k_{m,\mathrm{S}}} \newcommand{\kmp}{k_{m,\mathrm{P}}} \newcommand{\dSdt}{\frac{d[\mathrm{S}]}{dt}} \newcommand{\dEdt}{\frac{d[\mathrm{E}]}{dt}} \newcommand{\dESdt}{\frac{d[\mathrm{ES}]}{dt}} \newcommand{\dPdt}{\frac{d[\mathrm{P}]}{dt}}$$

1.1 Enzyme Kinetics

Enzymes catalyze many critical chemical reactions in cells.

Describing a cell with a mathematical model (a long-standing goal of computational biologists) would entail modelling each enzyme-catalyzed chemical reaction.

However, although we may know the scheme for many enzymatic reactions (the responsible enzyme, the associated substrates, and the resultant products), we are often missing many of the details needed to construct a faithful mathematical model of the reaction.

Let's begin by introducing the mathematical model used to describe enzymatic reaction schemes. Consider the following enzymatically-catalyzed (uni uni) chemical reaction scheme:

$$ E+S \underset{\koff}{\overset{\kon}{\rightleftarrows}} ES \underset{\kuncat}{\overset{\kcat}{\rightleftarrows}}E+P $$

In this scheme E is an enzyme, S is its substrate, ES is the enzyme-substrate complex, which is an intermediate, and P is the product of the reaction. Each of those chemical species has a concentration in a fixed volume, which we denote with brackets (e.g. $[\mathrm{E}]$ = enzyme concentration).

If we make the simplifying assumption that the 4 molecular species are 'well-mixed' in solution, we can invoke the 'Law of Mass Action', under which the rate of each of the four included reactions is proportional to the product of the concentrations of its reactants (with a proportionality coefficient called the rate constant). The reactions in the above scheme are: enzyme-substrate association ($\kon$), dissociation ($\koff$), enzyme catalysis of substrate into product ($\kcat$), and enzyme-product re-association ("uncatalysis", $\kuncat$). The designation of 'substrate' and 'product' is our choice -- the model is entirely symmetric, which is reflected in the associated ODEs:

$$\begin{aligned} \frac{d[\mathrm{S}]}{dt} &= k_{\mathrm{off}}[\mathrm{ES}] - k_{\mathrm{on}}[\mathrm{E}][\mathrm{S}] \\ \frac{d[\mathrm{E}]}{dt} &= k_{\mathrm{off}}[\mathrm{ES}] - k_{\mathrm{on}}[\mathrm{E}][\mathrm{S}] + k_{\mathrm{cat}}[\mathrm{ES}] - k_{\mathrm{uncat}}[\mathrm{E}][\mathrm{P}] \\ \frac{d[\mathrm{ES}]}{dt} &= - k_{\mathrm{off}}[\mathrm{ES}] + k_{\mathrm{on}}[\mathrm{E}][\mathrm{S}] - k_{\mathrm{cat}}[\mathrm{ES}] + k_{\mathrm{uncat}}[\mathrm{E}][\mathrm{P}] \\ \frac{d[\mathrm{P}]}{dt} &= k_{\mathrm{cat}}[\mathrm{ES}] - k_{\mathrm{uncat}}[\mathrm{E}][\mathrm{P}] \end{aligned}$$

This differential equation model describing the (deterministic) chemical kinetics for an enzymatically-catalyzed reaction in well-mixed conditions contains 4 kinetic parameters, i.e. 4 degrees of freedom, which we do not know a priori. These will be the subject of inference.

Note: the intracellular environment is not best described as well-mixed, and models of 'Macromolecular Crowding' have led to more accurate rate laws for these reactions in vivo. However, we will retain the well-mixed assumption for now.

1.2 Parameter Inference

There are 3 typical problems associated with ODE models:

  • Supplied with a complete specification of the system, the forward problem is to integrate the differential equations from some initial conditions forwards in time and predict the trajectory of the system. This is what is typically meant by "solving" the ODE system, but exact analytical solutions are rare, and numerical methods are often brought to bear to approximate system trajectories.
  • Supplied with one or more trajectories (data) but incomplete specification of the system, the inverse problem is to estimate parameters of the system (coefficients in the ODE expressions).
  • Finally, given some manipulable inputs, the control problem is to drive the system towards some desired state.

This post will explore a range of approaches for the inverse problem. Our goal will be to estimate the kinetic parameters of enzymatically-catalyzed chemical reactions from timeseries of concentrations of the molecular species.

Note: enzyme kinetic parameters are typically not inferred from metabolite timeseries data using the methods we will describe, but instead from specific enzyme assays. However, at the moment, these assays are limited to studying one enzyme at a time. The inference approaches described in this post can leverage data from emerging high-throughput assays.

The determination of the kinetic parameters for the enzymatic reactions of life is a major project, and reported values have been tabulated in databases such as BRENDA. However, my experience with these databases has been that the reported kinetic parameters are not internally consistent.

1.3 The Michaelis-Menten/Briggs-Haldane Approximation

Two assumptions commonly made at this point are:

  1. to assume the initial substrate concentration is much larger than the enzyme concentration ($[\mathrm{S_0}] \gg [\mathrm{E_0}]$).
  2. to suppose that the rates of enzyme-substrate association ($\kon$) and dissociation ($\koff$) are greater than the rates of catalysis and uncatalysis (i.e. $\kon$, $\koff$ $\gg$ $\kcat$, $\kuncat$).

These assumptions permit a timescale separation argument called the "Quasi-Steady-State Approximation" (QSSA) in which we set $\dESdt = 0$, which enables the derivation of the traditional Reversible Michaelis-Menten/Briggs-Haldane expression:

$$\begin{aligned} \frac{d[\mathrm{P}]}{dt} &= \frac{ \frac{\kcat \, [\mathrm{E_T}] [\mathrm{S}]}{K_{m,\mathrm{S}}} - \frac{\koff \, [\mathrm{E_T}] [\mathrm{P}]}{K_{m,\mathrm{P}}}} {1+\frac{[\mathrm{S}]}{K_{m,\mathrm{S}}} + \frac{[\mathrm{P}]}{K_{m,\mathrm{P}}}} \\ \\ \frac{d[\mathrm{S}]}{dt} &= -\frac{d[\mathrm{P}]}{dt} \end{aligned}$$

in which we have introduced the "Michaelis Constants": $K_{m,\mathrm{S}} = \frac{\koff + \kcat}{\kon}$ and $K_{m,\mathrm{P}} = \frac{\koff + \kcat}{\kuncat}$.
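
For completeness, the intermediate steps run roughly as follows (a sketch, using the conserved total enzyme $[\mathrm{E_T}] = [\mathrm{E}] + [\mathrm{ES}]$). Setting $\dESdt = 0$ and substituting $[\mathrm{E}] = [\mathrm{E_T}] - [\mathrm{ES}]$ gives

$$[\mathrm{ES}] = \frac{[\mathrm{E_T}]\left(\frac{[\mathrm{S}]}{K_{m,\mathrm{S}}} + \frac{[\mathrm{P}]}{K_{m,\mathrm{P}}}\right)}{1 + \frac{[\mathrm{S}]}{K_{m,\mathrm{S}}} + \frac{[\mathrm{P}]}{K_{m,\mathrm{P}}}}$$

and substituting this into $\dPdt = \kcat[\mathrm{ES}] - \kuncat[\mathrm{E}][\mathrm{P}]$ (again with $[\mathrm{E}] = [\mathrm{E_T}] - [\mathrm{ES}]$) and simplifying yields the expression above.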

The QSSA reduces the system from 4 variables to 2. However, there are still 4 kinetic parameters to estimate in this reduced model.

Note: another assumption typically made at this point is to assume that catalysis is irreversible ($\kuncat = 0$), leading to a further simplified expression for the rate of product formation $\frac{d[\mathrm{P}]}{dt}$. However, this assumption is quite often inaccurate, so we will not make it.

2. Exploring the Forward Model

2.1 A Standard Example

Before we explore techniques to estimate enzyme kinetic parameters from timeseries data, we need to generate timeseries data to begin with. We can accomplish that by fixing kinetic parameters, then solving the forward problem. It will turn out that integrating the differential equations forwards is a subroutine of both approaches to the inverse problem we'll see in this post, so developing a method for the forward problem is hardly wasted effort.

In order to produce a trajectory, we need to set initial conditions. We'll integrate the reaction kinetics of a hypothetical in vitro experiment, in which a fixed quantity of enzyme and substrate are added to the reaction at the outset, then left to react.

Note: in vivo we would expect the concentration of enzyme to vary over time, and the substrate to be replenished. We will generalize this approach to a more biologically-relevant setting in a future post.

Our initial conditions are:

  • $[E]_0$, the initial enzyme concentration, is set to 1 mM (millimolar, i.e. 1000 μM).
  • $[S]_0$, the initial substrate concentration, is set to 10 mM.
default_initial_conditions = {
    'E_0': 1e3,   # μM (= 1 mM)
    'S_0': 10e3   # μM (= 10 mM)
}

Next, let's fix some generic rate constants:

  • $\kon \,$ of $10^6$ events per molar per second, or 1 per μM per second, is a typical rate for enzyme-substrate binding.
  • $\koff \,$ of 500/s results in a $\koff$/$\kon$ = $K_d$ of 500 μM, which is a typical $K_d$.
  • $\kcat \,$ is 30/s, a fairly slow but respectable $\kcat$.
  • $\kuncat \,$ of $\frac{\kon}{10}$ (i.e. 0.1 per μM per second) is often considered the boundary for the QSSA to hold. Let's use $\kuncat = \frac{\kon}{100} = $ 0.01 per μM per second for good measure.

Our units are μM and seconds.

default_kinetic_params = {
    'k_on': 1,       # μM⁻¹ s⁻¹
    'k_off': 500,    # s⁻¹
    'k_cat': 30,     # s⁻¹
    'k_uncat': 0.01  # μM⁻¹ s⁻¹
}

# Michaelis constants derived from the elementary rate constants (§1.3)
def k_ms(p): return (p['k_off'] + p['k_cat']) / p['k_on']
def k_mp(p): return (p['k_off'] + p['k_cat']) / p['k_uncat']

default_kinetic_params['k_ms'] = k_ms(default_kinetic_params)
default_kinetic_params['k_mp'] = k_mp(default_kinetic_params)

There are a variety of numerical methods to integrate systems of differential equations. The most straightforward is Euler's method, which we've written down explicitly for this system below:
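
(The version below is a minimal reconstruction of such an integrator; the trailing **_unused absorbs the derived constants k_ms/k_mp, which the full kinetics don't use.)

import numpy as np
import pandas as pd

def euler_full(dt, steps, E_0, S_0, k_on, k_off, k_cat, k_uncat, **_unused):
    # Integrate the full mass-action kinetics with fixed Euler steps.
    S, E, ES, P = S_0, E_0, 0.0, 0.0
    rows = [(S, E, ES, P)]
    for _ in range(int(steps)):
        v_on, v_off = k_on*E*S, k_off*ES          # association / dissociation fluxes
        v_cat, v_uncat = k_cat*ES, k_uncat*E*P    # catalysis / "uncatalysis" fluxes
        S  += dt * (v_off - v_on)
        E  += dt * (v_off - v_on + v_cat - v_uncat)
        ES += dt * (v_on - v_off - v_cat + v_uncat)
        P  += dt * (v_cat - v_uncat)
        rows.append((S, E, ES, P))
    index = np.arange(int(steps) + 1) * dt
    return pd.DataFrame(rows, index=index, columns=['S', 'E', 'ES', 'P'])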

We'll also write down Euler's method for the Michaelis-Menten/Briggs-Haldane kinetics:
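
(Again a minimal reconstruction; the reduced model tracks only S and P, returned with an _MM suffix so the overlay plot can distinguish them.)

def euler_MM(dt, steps, E_0, S_0, k_off, k_cat, k_ms, k_mp, **_unused):
    # Integrate the reduced (reversible) Michaelis-Menten/Briggs-Haldane kinetics.
    S, P = S_0, 0.0
    E_T = E_0  # total enzyme is conserved in the reduced model
    rows = [(S, P)]
    for _ in range(int(steps)):
        dP = (k_cat*E_T*S/k_ms - k_off*E_T*P/k_mp) / (1 + S/k_ms + P/k_mp)
        S, P = S - dt*dP, P + dt*dP
        rows.append((S, P))
    index = np.arange(int(steps) + 1) * dt
    return pd.DataFrame(rows, index=index, columns=['S_MM', 'P_MM'])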

To simulate the kinetics with small discrete steps, we need a step size and a total number of steps (here $5 \times 10^5$ steps of $10^{-6}$ s each, i.e. 0.5 s of simulated time):

dt = 1e-6
steps = 5e5

Now we can integrate the reaction kinetics, and plot the trajectory. We'll overlay the Michaelis-Menten/Briggs-Haldane kinetics with dotted lines on top of the full kinetics (solid).

default_traj_full = euler_full(dt, steps, **default_kinetic_params, **default_initial_conditions)
default_traj_mm = euler_MM(dt, steps, **default_kinetic_params, **default_initial_conditions)
ax = default_traj_full.plot.line(title=param_string(**default_initial_conditions, **default_kinetic_params), color=color(default_traj_full.columns))
default_traj_mm.plot.line(ax=ax, color=color(default_traj_mm.columns), linestyle='--')

fig_style(ax)
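
The helpers param_string, color, and fig_style (and resize_fig, used later) are small plotting conveniences; hypothetical stand-ins, along with a few imports assumed elsewhere in the notebook, might look like:

import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats
from numpy import sqrt

# hypothetical color assignments for each species and parameter
c = {'S': 'tab:blue', 'S_MM': 'tab:blue', 'E': 'tab:orange', 'ES': 'tab:green',
     'P': 'tab:red', 'P_MM': 'tab:red',
     'k_on': 'tab:blue', 'k_off': 'tab:orange', 'k_cat': 'tab:green', 'k_uncat': 'tab:red'}

def color(columns): return [c[name] for name in columns]

def param_string(**params):
    return ', '.join(f'{name}={value:g}' for name, value in params.items())

def fig_style(ax):
    for side in ['right', 'top']: ax.spines[side].set_visible(False)
    ax.set_xlabel('time (s)', weight='bold')
    ax.set_ylabel('concentration (μM)', weight='bold')

def resize_fig(width, height):  # also hypothetical
    plt.gcf().set_size_inches(width / 72, height / 72)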

We can plainly see the validity of the Quasi-Steady-State Approximation (QSSA) in action in the trajectory: Enzyme E and Substrate S rapidly form Enzyme-Substrate complex ES, the concentration of which remains relatively constant throughout the course of the reaction (recall the QSSA is the approximation that $\dESdt = 0$). Thus, the Michaelis-Menten/Briggs-Haldane product concentration trajectory P_MM well approximates the full kinetics trajectory for the concentration of product P, since the requisite assumptions are valid, namely, (1) $[\mathrm{S_0}] \gg [\mathrm{E_0}]$ and (2) $\kon$, $\koff$ $\gg$ $\kcat$, $\kuncat$.

In practice, Michaelis-Menten/Briggs-Haldane kinetics are often assumed by default, risking the possibility of their misapplication. Let's take this opportunity to explore how the MM/BH kinetics diverge from the full kinetics when we violate the requisite assumptions.

2.2 Breaking the Michaelis-Menten/Briggs-Haldane Assumptions: Initial Substrate:Enzyme Ratio

Suppose first the number of molecules of substrate is not much greater than the number of molecules of enzyme, which is a plausible regime for certain reactions in vivo.

initial_conditions = {
    'E_0': 1e3,
    'S_0': 2e3
}
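
Re-running the integration and overlay from §2.1 with these initial conditions (the same cell pattern; §2.3 below is re-run the same way):

traj_full = euler_full(dt, steps, **default_kinetic_params, **initial_conditions)
traj_mm = euler_MM(dt, steps, **default_kinetic_params, **initial_conditions)
ax = traj_full.plot.line(title=param_string(**initial_conditions, **default_kinetic_params), color=color(traj_full.columns))
traj_mm.plot.line(ax=ax, color=color(traj_mm.columns), linestyle='--')
fig_style(ax)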

Then P_MM worsens significantly as an estimate of P.

2.3 Breaking the Michaelis-Menten/Briggs-Haldane Assumptions: Fast Enzyme-Substrate Complex Kinetics

Suppose further that the rates of association and dissociation of enzyme with substrate are not substantially faster than the rates of catalysis and uncatalysis.

kinetic_params = {
    'k_on': 0.05,
    'k_off': 1,
    'k_cat': 50,
    'k_uncat': 0.5
}

kinetic_params['k_ms'] = k_ms(kinetic_params)
kinetic_params['k_mp'] = k_mp(kinetic_params)

Then the Michaelis-Menten/Briggs-Haldane kinetics diverge further.

In each of these latter trajectories, the criteria to make the Michaelis-Menten/Briggs-Haldane approximation are violated, leading to poor approximations to the full kinetics. We belabor this point here because in the following, we will seek to infer the parameters of the kinetics, and our inference will fit poorly if we fit to inappropriate kinetic expressions.

2.4 Comparing Integrators

All of the above trajectories are generated by Euler's Method, the most intuitive ODE integration technique. Unfortunately, Euler's Method's naïvete has drawbacks:

  • The error is only first order in the timestep size, so accurate trajectories require very small steps.
  • The method is slow, because the uniform timestep must be small enough for the fastest transient, even where the solution changes slowly.

A popular alternative which addresses these drawbacks is the adaptive 4th/5th-order Runge-Kutta method (abbreviated RK45), which is the default integration method of scipy's solve_ivp.

Due to its superior speed and accuracy, we'll use this method during inference. As a sanity check, we compare our Euler Method code to scipy's RK45:
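
The comparison cell is sketched below, assuming dy_full is the shared right-hand-side helper also used by integrate in §3.2.2 (its keyword arguments absorb anything the kinetics don't need):

import time
from scipy.integrate import solve_ivp

def dy_full(t, y, k_on, k_off, k_cat, k_uncat, **_unused):
    # right-hand side of the full mass-action ODE system, y = [S, E, ES, P]
    S, E, ES, P = y
    return [k_off*ES - k_on*E*S,
            k_off*ES - k_on*E*S + k_cat*ES - k_uncat*E*P,
            -k_off*ES + k_on*E*S - k_cat*ES + k_uncat*E*P,
            k_cat*ES - k_uncat*E*P]

start = time.time()
euler_full(dt, steps, **default_kinetic_params, **default_initial_conditions)
euler_time = time.time() - start

start = time.time()
scipy_traj = solve_ivp(lambda t, y: dy_full(t, y, **default_kinetic_params),
                       t_span=(0, dt*steps),
                       y0=[default_initial_conditions['S_0'], default_initial_conditions['E_0'], 0.0, 0.0],
                       method='RK45')
scipy_time = time.time() - start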

The lack of deviation gives us confidence both integration techniques are accurate. Meanwhile,

f'our naïve code takes {round(euler_time, 2)}s, whereas the optimized scipy code takes {round(scipy_time, 4)}s to generate the same trajectory.'
'our naïve code takes 1.16s, whereas the optimized scipy code takes 0.0062s to generate the same trajectory.'

3. Inference

We have seen how the trajectory of the chemical system is a function of the kinetic parameters. We would now like to invert that function to recover the kinetic parameters from an observed trajectory.

Suppose we know the initial concentrations of Enzyme E and Substrate S, and we measure the concentration of product P over the course of the reaction, which yields the following dataset:
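
A sketch of how such a dataset can be produced from the forward model: subsample the product concentration from the default trajectory, one sample every 0.05 s over the 0.5 s reaction.

sample_every = int(0.05 / dt)  # one sample every 0.05 s of simulated time
observations = default_traj_full['P'].iloc[::sample_every]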

We will supply noiseless measurements to our inference algorithms. However, our inference procedures will assume noise in the measurements.

Note: If we had measured $\dPdt$ for various (linearly independent) concentrations of $[\mathrm{S}]$, $[\mathrm{P}]$, and $[\mathrm{E}]_0$ (as in an in vitro enzyme assay) we could use a nonlinear regression with the Michaelis-Menten/Briggs-Haldane expression for $\dPdt$ (sketched in code after this note). Concretely, supposing we had a set of measurements for the variables in blue, a nonlinear regression would permit us to fit the constants in red: $$\color{blue}{\dPdt} = \frac{ \frac{\color{red}{\kcat} \, \color{blue}{[\mathrm{E_T}]} \color{blue}{[\mathrm{S}]}} {\color{red}{K_{m,\mathrm{S}}}} - \frac{\color{red}{\koff} \, \color{blue}{[\mathrm{E_T}]} \color{blue}{[\mathrm{P}]}}{\color{red}{K_{m,\mathrm{P}}}}} {1+\frac{\color{blue}{[\mathrm{S}]}}{\color{red}{K_{m,\mathrm{S}}}} + \frac{\color{blue}{[\mathrm{P}]}}{\color{red}{K_{m,\mathrm{P}}}}} $$ If we had assumed the reaction were irreversible, the Michaelis-Menten/Briggs-Haldane expression would have simplified to $$\color{blue}{\dPdt} = \frac{ \color{red}{\kcat} \, \color{blue}{[\mathrm{E_T}]} \color{blue}{[\mathrm{S}]}} {\color{red}{K_{m,\mathrm{S}}} + \color{blue}{[\mathrm{S}]}} $$ where $\color{red}{\kcat} \, \color{blue}{[\mathrm{E_T}]}$ is often consolidated as $\color{red}{V_{max}}$. To recap, we take a different approach because:
  1. Simultaneous measurements of the activity of many enzymes in cells might inform us about $[\mathrm{S}]$, $[\mathrm{P}]$, and perhaps $[\mathrm{E}]$, but not $\dPdt$. We would also presumably not be able to approximate $\dPdt$ via finite differences, due to the relative sparsity of the measurements in time compared to the rates of the reactions.
  2. This approach would produce spurious estimates of the kinetic parameters in cases in which the Quasi-Steady-State Approximation is invalid (see §2.2, §2.3) which may often be the case in vivo.
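
For concreteness, such a regression might look like the following sketch, in which S_meas, P_meas, ET_meas, and dPdt_meas are hypothetical measurement arrays (they don't exist in this notebook):

from scipy.optimize import curve_fit

def mm_bh_rate(X, k_cat, k_off, K_mS, K_mP):
    # reversible Michaelis-Menten/Briggs-Haldane rate as a function of (S, P, E_T)
    S, P, E_T = X
    return (k_cat*E_T*S/K_mS - k_off*E_T*P/K_mP) / (1 + S/K_mS + P/K_mP)

# (k_cat, k_off, K_mS, K_mP), _ = curve_fit(mm_bh_rate, (S_meas, P_meas, ET_meas),
#                                           dPdt_meas, p0=[30, 500, 500, 50000])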
At the moment, I believe there are no methods for the inverse problem which are not variants of the two methods I will describe, and importantly, no methods which do not iterate a loop, solving the forward problem at each iteration.

There are two types of approaches to solving this inverse problem. We will explore the simplest variant of each type.

3.1 Bayesian Approach: Inference by Sampling

[We assume the reader is familiar with Bayesian Inference in other settings.]

The goal of the Bayesian approach is to determine a posterior over the 4D space spanned by the kinetic parameters. The posterior is the product of the prior and likelihood (up to a constant factor). Thus the Bayesian Inference approach entails defining a prior and a likelihood.

3.1.1. Prior

If the kinetic parameters of the enzyme under study are not unlike the kinetic parameters of enzymes studied in the past, then the empirical distribution of kinetic parameters of enzymes studied in the past is a good prior for the parameters of this enzyme.

Since databases of observed enzyme kinetic parameters (e.g. BRENDA, SABIO-RK) are difficult to work with, we'll use a previously curated set of kinetic parameters from the supplement of The Moderately Efficient Enzyme: Evolutionary and Physicochemical Trends Shaping Enzyme Parameters.

If we knew what sort of enzyme we were studying (which EC class) we could narrow our prior to just those kinetic parameters observed for enzymes of that class.

This database lists $k_{\mathrm{m}}$ and $\kcat$ for both the "forwards" and "reverse" directions of each reaction (relative to the direction biologists consider "productive"). From it we can derive distributions for $\kms$ and $\kcat$ from reactions in the forwards direction, and for $\kmp$ and $\koff$ from reactions in the reverse direction.

This plot is surprising: according to this database, enzymes appear to have roughly equal binding affinity for their substrates and products.

On the other hand, they have a fairly strong preference for catalyzing the reaction biologists think of as forwards (~10x).

These empirical distributions over $\kms$ and $\kcat$ in the forwards direction and $\kmp$ and $\koff$ in the reverse direction look sufficiently like normals in log space that we'll treat them as lognormals. However, we would like our inference procedure to estimate the semantic parameters $\kon$, $\koff$, $\kcat$, and $\kuncat$. We can rearrange the expressions for $\kms$ and $\kmp$ to get expressions for the two parameters we're missing:

$$ \kon = \frac{\koff + \kcat}{\kms} \quad \mathrm{and} \quad \kuncat = \frac{\koff + \kcat}{\kmp}$$

Conveniently, the ratio of lognormal variables $\frac{X_1}{X_2}$ is also lognormal, with $\mu_{1/2} = \mu_1 - \mu_2$ and $\sigma^2_{1/2} = \sigma^2_1 + \sigma^2_2 - 2\sigma_{1,2}$. To use that fact, we approximate the sum $\koff + \kcat$ as log-normally distributed as well (sums of lognormals are not exactly lognormal, but treating them as such is a common approximation), and we compute its mean and variance empirically.

kcat_plus_koff = pd.Series(np.repeat(empirical_kcat.values, len(empirical_koff)) +
                           np.tile(empirical_koff.values, len(empirical_kcat)))

log_kcat_plus_koff_mean = np.log10(kcat_plus_koff).mean()
log_kcat_plus_koff_var = np.log10(kcat_plus_koff).var()

This permits us to produce empirical distributions for $\kon$ and $\kuncat$,

log_kon_normal = scipy.stats.norm(loc=log_kcat_plus_koff_mean-log_empirical_kms.mean(),
                                  scale=sqrt(log_kcat_plus_koff_var+log_empirical_kms.var()))

log_kuncat_normal = scipy.stats.norm(loc=log_kcat_plus_koff_mean-log_empirical_kmp.mean(),
                                     scale=sqrt(log_kcat_plus_koff_var+log_empirical_kmp.var()))

which, along with our empirical distributions for $\koff$ and $\kcat$, define a prior over the 4 kinetic parameters we wish to infer.

We might ask whether these empirical distributions are correlated in log space. They are not correlated enough to justify including covariances in the prior, so we set the prior covariance to a diagonal matrix:

prior_cov = np.diag([log_kon_normal.var(),
                     log_koff_normal.var(),
                     log_kcat_normal.var(),
                     log_kuncat_normal.var()])

def prior_pdf(k_on=None, k_off=None, k_cat=None, k_uncat=None):
    # arguments are log10-transformed kinetic parameters
    return (
    log_kon_normal.pdf(k_on) *
    log_koff_normal.pdf(k_off) *
    log_kcat_normal.pdf(k_cat) *
    log_kuncat_normal.pdf(k_uncat))

def prior_logpdf(k_on=None, k_off=None, k_cat=None, k_uncat=None):
    return (
    log_kon_normal.logpdf(k_on) +
    log_koff_normal.logpdf(k_off) +
    log_kcat_normal.logpdf(k_cat) +
    log_kuncat_normal.logpdf(k_uncat))

def sample_prior():
    # returns a dict of log10-scale parameters: k_on, k_off, k_cat, k_uncat
    return {
    'k_on': log_kon_normal.rvs(),
    'k_off': log_koff_normal.rvs(),
    'k_cat': log_kcat_normal.rvs(),
    'k_uncat': log_kuncat_normal.rvs()}

Now that we have a prior, let's examine where the default parameters introduced in §2.1 land in this distribution. We had claimed they were "typical".
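
As a quick check (a sketch), we can evaluate the prior log-density at the defaults, remembering that the prior lives in log10 space:

log_defaults = {name: np.log10(value)
                for name, value in default_kinetic_params.items()
                if name in ('k_on', 'k_off', 'k_cat', 'k_uncat')}
prior_logpdf(**log_defaults)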

3.1.2. Likelihood

We need to define a likelihood $p(D|\theta)$ which measures the probability of producing the observed data given settings of the kinetic parameters $\theta = \{\kon, \koff, \kcat, \kuncat\}$. Our data $D = \{ \color{00008b}{ [\mathrm{P}]_t } \, \color{black}{ ; t \in 0...0.5\}}$ are an observed trajectory of concentrations of reaction product P. Each setting of the kinetic parameters corresponds to a trajectory of concentrations of P (via a numerical integration). Intuitively, parameter sets which result in trajectories very near the observed trajectory are more likely. Therefore, our likelihood should measure the distance between the observed $\{ \color{00008b}{ [\mathrm{P}]_t } \color{black}{ \} }$ and predicted $\{ \color{blue}{ [\mathrm{P}]_t } \color{black}{ \} }$.

How far should the predicted trajectory be allowed to stray from the measured $\{ \color{00008b}{ [\mathrm{P}]_t } \color{black}{ \} }$? The likelihood is really our statement about the presumed noise in our measurements. If we believe our measurements to be noiseless, then our likelihood should concentrate tightly around our measurements (a Dirac $\delta$ in the limit), and we would only admit kinetic parameters that interpolate the observed $\{ \color{00008b}{ [\mathrm{P}]_t } \color{black}{ \} }$ almost exactly. In reality, no measurement is noiseless, so we propose the following noise model:

Supposing the detection of each molecule of P is an independent binary random variable with error rate $\sigma$ then random variable $\color{red}{[\mathrm{P}]_t}$ is gaussian-distributed $\sim \mathcal{N}( \color{00008b}{[\mathrm{P}]_t} \color{black}, \sigma \sqrt{ \color{00008b}{[\mathrm{P}]_t }}\color{black} )$. The variance of the gaussian grows as the square root of the mean, via a Central Limit Theorem argument. We can represent this noise model (and consequently, likelihood) visually as:

σ = 5 # arbitrary magic number represents detection noise level

Concretely, the likelihood is the product of the gaussian marginals centered around the measurements. Together these form a multivariate normal with diagonal covariance, since we assume the noise at each timepoint is independent.

$$p(D|\theta) = \displaystyle \prod_{t=0}^{0.5} p_t(\color{blue}{[\mathrm{P}]_t} \color{black}) \textrm{ where } p_t \textrm{ is the density of } \color{red}{[\mathrm{P}]_t} \color{black} \sim \mathcal{N}( \color{00008b}{[\mathrm{P}]_t} \color{black}, \sigma \sqrt{ \color{00008b}{[\mathrm{P}]_t }}\color{black} )$$

This leaves us with a "hyperparameter" $\sigma$.

from scipy.stats import multivariate_normal

# skip the t=0 measurement, where [P] = 0 would give a degenerate (zero-variance) marginal
likelihood_dist = multivariate_normal(mean=observations.values[1:], cov=σ * np.diag(sqrt(observations.values[1:])))

def likelihood_logpdf(ut): return likelihood_dist.logpdf(ut)

3.1.3. Metropolis-Hastings

We can now evaluate the prior $p(\theta)$ and the likelihood $p(D|\theta)$ of kinetic parameters $\theta = \{\kon, \koff, \kcat, \kuncat\}$. Those two distributions permit us to construct a Markov Chain Monte Carlo (MCMC) routine to sample from the posterior $p(\theta|D) \propto p(D|\theta) \cdot p(\theta)$. The algorithm is as follows:

Repeat:

  1. Draw kinetic parameters from the proposal distribution.
  2. Integrate the system with the proposed kinetic parameters.
  3. Evaluate the likelihood of the trajectory generated in step 2.
  4. Accept/Reject the proposal by a Metropolis-Hastings criterion.
  5. Append the current kinetic parameters to the Markov Chain.
  6. Construct a proposal distribution around the current kinetic parameters.

Since the likelihood assigns most of the probability mass to a fairly narrow region of parameter space, most parameter sets have extremely low probability. In order to preserve some numerical stability, we log-transform the typical Metropolis-Hastings expressions. So typically $π_t = \mathrm{likelihood\_pdf}(u_t) \cdot \mathrm{prior\_pdf}(θ_t)$ and the acceptance criterion is $\frac{π_{t+1}}{π_t} > \mathrm{rand}([0,1])$. In log space, the acceptance criterion becomes: $\log(π_{t+1}) - \log(π_t) > \log(\mathrm{rand}([0,1]))$ with $\log(π_t) = \mathrm{likelihood\_logpdf}(u_t) + \mathrm{prior\_logpdf}(θ_t)$.

def MH_MCMC(chain_length=1e3):

    θt = sample_prior()
    ut = integrate(θt)
    πt = likelihood_logpdf(ut) + prior_logpdf(**θt)
    if all(ut == 0): return MH_MCMC(chain_length)  # integration failed: retry with a fresh draw from the prior

    cov = np.eye(4) * 5e-4
    i = 0
    accept_ratio = 0
    chain = []
    samples = []

    while i < chain_length:

        θtp1 = proposal(θt, cov)
        utp1 = integrate(θtp1)
        πtp1 = likelihood_logpdf(utp1) + prior_logpdf(**θtp1)

        if πtp1 - πt > np.log(np.random.rand()):

            θt, ut, πt = θtp1, utp1, πtp1
            accept_ratio += 1

        chain.append(θt)
        samples.append(ut)

        i += 1

        if i % 100 == 0 and i > 300:
            # cov = pd.DataFrame(chain[100:]).cov()  # optionally adapt the proposal covariance to the chain so far
            print(i, end='\r')  # progress indicator

    chain = pd.DataFrame(chain)
    samples = pd.DataFrame(np.hstack((np.zeros((len(chain), 1)), samples)), columns=observations.index)
    accept_ratio = accept_ratio/chain_length

    return chain, samples, accept_ratio

Our proposal density for the time being can be a simple isotropic gaussian around the current (log10) parameters.

def proposal(θt, cov):

    μ = [θt['k_on'], θt['k_off'], θt['k_cat'], θt['k_uncat']]

    θtp1 = dict(zip(['k_on', 'k_off', 'k_cat', 'k_uncat'], np.random.multivariate_normal(μ, cov)))

    return θtp1

Now let's put it into practice:

chain_length = 1e3
chain, samples, accept_ratio = MH_MCMC(chain_length=chain_length)
print('accept_ratio:', accept_ratio)
accept_ratio: 0.072
def fig_style_3(ax):
    for side in ["right","top"]: ax.spines[side].set_visible(False)
    ax.set_xlabel('chain',  weight='bold')
    ax.set_ylabel('log parameter values',  weight='bold')

def plot_chain(chain, ax=None):
    if ax is None: fig, ax = plt.subplots()
    chain.plot.line(xlim=(0,len(chain)), color=[c[param_name] for param_name in chain.columns], ax=ax)

    for param_name in chain.columns:
        param_value = default_kinetic_params[param_name]
        ax.axhline(np.log10(param_value), lw=0.5, color=c[param_name], linestyle='--')
        ax.fill_between(np.arange(len(chain)), chain[param_name], np.repeat(np.log10(param_value), len(chain)), color=c[param_name], alpha=0.05)

    fig_style_3(ax)

plot_chain(chain)
sns.pairplot(chain, kind="kde")
resize_fig(600, 600)

def plot_samples(samples, ax=None):
    if ax is None: fig, ax = plt.subplots()
    observations.plot.line(marker='o', lw=0, color=c['P'], ylim=(-300, 10800), ax=ax, legend=True)
    samples.T.plot.line(colormap=plt.get_cmap('plasma'), alpha=0.1, ax=ax, legend=False, zorder=1)
    fig_style(ax)

plot_samples(samples)

def MCMC_run():
    chain, samples, accept_ratio = MH_MCMC(chain_length=chain_length)
    fig, axs = plt.subplots(1, 2)
    plot_chain(chain, ax=axs[0])
    plot_samples(samples, ax=axs[1])
    print('accept_ratio:', accept_ratio)

MCMC_run()
accept_ratio: 0.08
MCMC_run()
accept_ratio: 0.353
MCMC_run()
accept_ratio: 0.387

A few things pop out from the above chains:

  1. It appears to be possible to closely fit the observed data with parameter sets very different from the ones used to generate the observed trajectory.
  2. It appears that our chain is finding local maxima in the posterior and struggling to escape.

Note: Some of the above trajectories appear non-smooth. That's only a consequence of the fact that we're visualizing a coarse sampling of those trajectories -- the underlying trajectories are sampled much more densely by scipy's integrator.

3.2 Frequentist Approach: Inference by Optimization

In the previous section, we wandered around parameter space, biasing our random walk towards regions where both the prior probability and the likelihood were greater. After enough iterations, the visited points constitute an (asymptotically exact) sample from the entire posterior distribution over the values of the kinetic parameters.

An alternative approach begins with another premise:

  • Suppose we want to incorporate no prior knowledge about our particular enzyme's kinetic parameters, and let the timeseries data alone govern our determination of the enzyme's parameters.
  • Suppose as well that instead of searching for a distribution of plausible parameters, we're only interested in finding the single most likely set of parameters.

These two choices recast the inference task as an optimization problem.

Optimization problems require an objective, such as minimizing a loss (or cost) function. We'll minimize the conventional squared error between our trajectory $u$ and the data $d$: $G(u(t, \theta)) = \sum_t \| d(t) - u(t, \theta) \|_2^2$.
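
As a sketch, here's what that objective looks like in code, reusing the integrate helper that appears in §3.2.2 (it returns the predicted $[\mathrm{P}]$ at the observation times):

def loss(θ):
    # G(u(t, θ)): sum of squared residuals at the observation times
    u = integrate(θ)             # predicted [P], one value per observation time
    d = observations.values[1:]  # observed [P], skipping t = 0
    return np.sum((d - u) ** 2)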

3.2.1. Forward Sensitivities

In order to optimize our parameters $\theta$ with respect to our loss function $G$, we need a means to evaluate the gradient of the loss with respect to the parameters. Naively:

$$\frac{dG(u(t, \theta))}{d\theta} = \frac{d}{d\theta} \sum_t \| d(t) - u(t, \theta) \|_2^2 = \sum_t \left[ -2(d(t) - u(t, \theta)) \frac{du(t, \theta)}{d\theta} \right]$$

However, the quantity $\frac{du(t, \theta)}{d\theta}$ is not immediately available. We can derive it as follows:

Our original differential equation is $\frac{du(t, \theta)}{dt} = f(u(t, \theta), \theta)$. If we take $\frac{\partial}{\partial\theta} \left[ \frac{du(t, \theta)}{dt} \right] = \frac{\partial}{\partial\theta} \left[ f(u(t, \theta), \theta) \right]$, we can rearrange as
$\frac{d}{dt} \left[ \frac{\partial u(t, \theta)}{\partial\theta} \right] = \frac{\partial}{\partial\theta} \left[ f(u(t, \theta), \theta) \right]$ and then integrate over $t$ for

$$\int_{t_0}^T\frac{d}{dt} \left[ \frac{\partial u(t, \theta)}{\partial\theta} \right]dt = \int_{t_0}^T\frac{\partial}{\partial\theta} \left[ f(u(t, \theta), \theta) \right]dt = \int_{t_0}^T \left[ \frac{\partial f}{\partial u} \Big|_{u(t, \theta), \theta} \frac{\partial u}{\partial \theta} \Big|_t + \frac{\partial f}{\partial \theta} \Big|_{u(t, \theta), \theta} \right] dt$$

which is exactly $\frac{du(T, \theta)}{d\theta}$, given the initial sensitivity at $t_0$ (addressed below). Surprisingly, what we've done is define an ODE whose solution (integral) is the sensitivity we need. This ODE is usually called the forward sensitivity ODE. We can solve (integrate) both the original ODE and the sensitivity ODE forwards in time together.

But first, we need to understand the constituent expressions: $\frac{\partial f}{\partial u} \Big|_{u(t, \theta), \theta}$ , $\frac{\partial u}{\partial \theta} \Big|_t$ and $\frac{\partial f}{\partial \theta} \Big|_{u(t, \theta), \theta}$

Recall,

$$\frac{du}{dt} = \frac{d}{dt}\begin{bmatrix}[\mathrm{S}] \\ [\mathrm{E}] \\ [\mathrm{ES}] \\ [\mathrm{P}] \end{bmatrix} = \begin{bmatrix} k_{\mathrm{off}}[\mathrm{ES}] - k_{\mathrm{on}}[\mathrm{E}][\mathrm{S}] \\ k_{\mathrm{off}}[\mathrm{ES}] - k_{\mathrm{on}}[\mathrm{E}][\mathrm{S}] + k_{\mathrm{cat}}[\mathrm{ES}] - k_{\mathrm{uncat}}[\mathrm{E}][\mathrm{P}] \\ - k_{\mathrm{off}}[\mathrm{ES}] + k_{\mathrm{on}}[\mathrm{E}][\mathrm{S}] - k_{\mathrm{cat}}[\mathrm{ES}] + k_{\mathrm{uncat}}[\mathrm{E}][\mathrm{P}] \\ k_{\mathrm{cat}}[\mathrm{ES}] - k_{\mathrm{uncat}}[\mathrm{E}][\mathrm{P}] \end{bmatrix} = f(u(t, \theta), \theta)$$

$\frac{\partial f}{\partial u} \Big|_{u(t, \theta), \theta}$ is the derivative of the derivative with respect to the state. Since both are 4D, this is a 4x4 Jacobian:

$$\frac{\partial f}{\partial u} = \begin{bmatrix}\frac{\partial f}{\partial[\mathrm{S}]} & \frac{\partial f}{\partial[\mathrm{E}]} & \frac{\partial f}{\partial[\mathrm{ES}]} & \frac{\partial f}{\partial[\mathrm{P}]} \end{bmatrix} = \begin{bmatrix} -k_{\mathrm{on}}[\mathrm{E}] & -k_{\mathrm{on}}[\mathrm{S}] & k_{\mathrm{off}} & 0 \\ -k_{\mathrm{on}}[\mathrm{E}] & -k_{\mathrm{on}}[\mathrm{S}] - k_{\mathrm{uncat}}[\mathrm{P}] & k_{\mathrm{off}} + k_{\mathrm{cat}} & -k_{\mathrm{uncat}}[\mathrm{E}] \\ k_{\mathrm{on}}[\mathrm{E}] & k_{\mathrm{on}}[\mathrm{S}] + k_{\mathrm{uncat}}[\mathrm{P}] & -k_{\mathrm{off}} - k_{\mathrm{cat}} & k_{\mathrm{uncat}}[\mathrm{E}] \\ 0 & -k_{\mathrm{uncat}}[\mathrm{P}] & k_{\mathrm{cat}} & -k_{\mathrm{uncat}}[\mathrm{E}] \end{bmatrix}$$

$\frac{\partial f}{\partial \theta} \Big|_{u(t, \theta), \theta}$ is the derivative of the derivative with respect to one of the parameters.

$$\frac{\partial f}{\partial k_{\mathrm{on}}} = \begin{bmatrix} -[\mathrm{E}][\mathrm{S}] \\ -[\mathrm{E}][\mathrm{S}] \\ [\mathrm{E}][\mathrm{S}] \\ 0 \end{bmatrix}, \qquad \frac{\partial f}{\partial k_{\mathrm{off}}} = \begin{bmatrix} [\mathrm{ES}] \\ [\mathrm{ES}] \\ -[\mathrm{ES}] \\ 0 \end{bmatrix}, \qquad \frac{\partial f}{\partial k_{\mathrm{cat}}} = \begin{bmatrix} 0 \\ [\mathrm{ES}] \\ -[\mathrm{ES}] \\ [\mathrm{ES}] \end{bmatrix}, \qquad \frac{\partial f}{\partial k_{\mathrm{uncat}}} = \begin{bmatrix} 0 \\ -[\mathrm{E}][\mathrm{P}] \\ [\mathrm{E}][\mathrm{P}] \\ -[\mathrm{E}][\mathrm{P}] \end{bmatrix} $$

$\frac{\partial u}{\partial \theta} \Big|_t$ is the variable of integration, which means we only need to define a boundary condition for it, in this case, an initial value:

$$ \frac{\partial u}{\partial \theta} \Big|_{t_0} = \frac{\partial}{\partial \theta} u(0, \theta) $$

But since in our case the initial condition $u(0, \theta) = u(0)$ does not depend on $\theta$, $\frac{\partial u}{\partial \theta} \Big|_{t_0} = 0$.

Now we're ready to augment our original Euler method to compute both $\int_{t_0}^T\frac{du(t, \theta)}{dt} dt$ as before and add $\int_{t_0}^T\frac{\partial}{\partial\theta} \left[ f(u(t, \theta), \theta) \right] dt$.
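
A sketch of the augmented integrator, reconstructed to match how it is called and how its columns are named below: the state $u$ is integrated alongside the 4×4 sensitivity matrix $\frac{\partial u}{\partial \theta}$.

def euler_full_sensitivities(dt, steps, E_0, S_0, k_on, k_off, k_cat, k_uncat, **_unused):
    # Integrate the state u = [S, E, ES, P] together with the forward
    # sensitivities du/dθ for θ = (k_on, k_off, k_cat, k_uncat).
    u = np.array([S_0, E_0, 0.0, 0.0])
    dudθ = np.zeros((4, 4))  # ∂u/∂θ at t0 is 0: the ICs don't depend on θ
    rows = [np.concatenate([u, dudθ.T.ravel()])]
    for _ in range(int(steps)):
        S, E, ES, P = u
        f = np.array([k_off*ES - k_on*E*S,
                      k_off*ES - k_on*E*S + k_cat*ES - k_uncat*E*P,
                      -k_off*ES + k_on*E*S - k_cat*ES + k_uncat*E*P,
                      k_cat*ES - k_uncat*E*P])
        J = np.array([[-k_on*E, -k_on*S,             k_off,          0.0       ],
                      [-k_on*E, -k_on*S - k_uncat*P, k_off + k_cat, -k_uncat*E ],
                      [ k_on*E,  k_on*S + k_uncat*P, -k_off - k_cat, k_uncat*E ],
                      [ 0.0,    -k_uncat*P,          k_cat,         -k_uncat*E ]])
        F = np.array([[-E*S,  ES,  0.0,  0.0 ],   # columns: ∂f/∂k_on, ∂f/∂k_off,
                      [-E*S,  ES,  ES,  -E*P ],   #          ∂f/∂k_cat, ∂f/∂k_uncat
                      [ E*S, -ES, -ES,   E*P ],
                      [ 0.0,  0.0, ES,  -E*P ]])
        u = u + dt * f
        dudθ = dudθ + dt * (J @ dudθ + F)  # sensitivity ODE: d/dt (du/dθ) = J du/dθ + F
        rows.append(np.concatenate([u, dudθ.T.ravel()]))
    columns = (['S', 'E', 'ES', 'P'] +
               [f'{s}_{k}' for k in ['k_on', 'k_off', 'k_cat', 'k_uncat']
                           for s in ['S', 'E', 'ES', 'P']])
    return pd.DataFrame(rows, index=np.arange(int(steps) + 1) * dt, columns=columns)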

dt
1e-06
traj_full_sensitivities = euler_full_sensitivities(dt, steps, **default_initial_conditions, **default_kinetic_params)

Recall, computing the sensitivity of the solution with respect to the parameters was in service of computing the gradient of our loss function with respect to the parameters:

$$\frac{dG(u(t, \theta))}{d\theta} = \sum_t \left[ -2(d(t) - u(t, \theta)) \frac{du(t, \theta)}{d\theta} \right]$$
def gradient_of_loss(solution_with_current_parameters, sensitivities):
    # dG/dθ = -Σ_t 2 (d(t) - u(t,θ)) · du/dθ|_t, summed over the observation times
    residual = 2 * (observations - solution_with_current_parameters.loc[observations.index])
    return -(residual.values @ sensitivities.loc[observations.index].values)

# the residual factor 2 (d(t) - u(t,θ)) at each observation time:
2*(observations - traj_full_sensitivities.loc[observations.index, 'P'])
0.00     0.000000
0.05    24.297934
0.10    45.780001
0.15    63.023211
0.20    74.114254
0.25    76.746393
0.30    69.499163
0.35    54.384680
0.40    37.369056
0.45    23.907934
0.50    15.430199
Name: P, dtype: float64
sensitivity_columns = [f'{s}_{k}' for k in ['k_on', 'k_off', 'k_cat', 'k_uncat'] for s in ['S', 'E', 'ES', 'P']]
traj_full_sensitivities.loc[observations.index, sensitivity_columns]
S_k_on E_k_on ES_k_on P_k_on S_k_off E_k_off ES_k_off P_k_off S_k_cat E_k_cat ES_k_cat P_k_cat S_k_uncat E_k_uncat ES_k_uncat P_k_uncat
0.00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
0.05 -7.697091e+02 5.372020e+01 1.626208e+03 8.187395e+02 1.114331e+00 -4.853516e-02 -2.339312e+00 -1.136811e+00 -1.136485e+02 1.560678e+01 2.133312e+02 1.336529e+02 3.317268e+03 -3.862393e+02 -5.105741e+03 -3.798363e+03
0.10 -2.283233e+04 4.636704e+03 5.524998e+04 3.066560e+04 3.312126e+01 -6.654441e+00 -8.009004e+01 -4.424026e+01 -3.355416e+03 6.999644e+02 8.072987e+03 4.570204e+03 1.139885e+05 -2.304153e+04 -2.646747e+05 -1.532638e+05
0.15 -5.223955e+05 1.571180e+05 1.478646e+06 8.569192e+05 7.580002e+02 -2.278397e+02 -2.145359e+03 -1.242581e+03 -7.671888e+04 2.310991e+04 2.171062e+05 1.260574e+05 2.639340e+06 -7.920160e+05 -7.437362e+06 -4.323537e+06
0.20 -1.011458e+07 4.678030e+06 3.498549e+07 2.184601e+07 1.467669e+04 -6.787761e+03 -5.076501e+04 -3.169730e+04 -1.485339e+06 6.870385e+05 5.137648e+06 3.208668e+06 5.115625e+07 -2.365258e+07 -1.768657e+08 -1.104505e+08
0.25 -1.488822e+08 1.150565e+08 6.740897e+08 4.858381e+08 2.160349e+05 -1.669518e+05 -9.781338e+05 -7.049680e+05 -2.186342e+07 1.689622e+07 9.899047e+07 7.134685e+07 7.530758e+08 -5.819550e+08 -3.409505e+09 -2.457295e+09
0.30 -1.332110e+09 1.898536e+09 8.734387e+09 8.040637e+09 1.932952e+06 -2.754862e+06 -1.267399e+07 -1.166732e+07 -1.956208e+08 2.788010e+08 1.282648e+09 1.180773e+09 6.738169e+09 -9.603234e+09 -4.418049e+10 -4.067107e+10
0.35 -5.697001e+09 1.652603e+10 6.161716e+10 8.126081e+10 8.266609e+06 -2.398003e+07 -8.940932e+07 -1.179131e+08 -8.366065e+08 2.426854e+09 9.048502e+09 1.193318e+10 2.881705e+10 -8.359317e+10 -3.116763e+11 -4.110382e+11
0.40 -1.087360e+10 6.913919e+10 2.211143e+11 4.536243e+11 1.577809e+07 -1.003241e+08 -3.208470e+08 -6.582296e+08 -1.596792e+09 1.015311e+10 3.247072e+10 6.661489e+10 5.500184e+10 -3.497256e+11 -1.118459e+12 -2.294557e+12
0.45 -1.098488e+10 1.588535e+11 4.625764e+11 1.500032e+12 1.593956e+07 -2.305038e+08 -6.712195e+08 -2.176614e+09 -1.613133e+09 2.332770e+10 6.792950e+10 2.202802e+11 5.556478e+10 -8.035271e+11 -2.339845e+12 -7.587585e+12
0.50 -7.488147e+09 2.506734e+11 6.945067e+11 3.428388e+12 1.086564e+07 -3.637386e+08 -1.007761e+09 -4.974749e+09 -1.099637e+09 3.681148e+10 1.019885e+11 5.034600e+11 3.787736e+10 -1.267980e+12 -3.513018e+12 -1.734178e+13
ax = traj_full_sensitivities[['S', 'E', 'ES', 'P']].plot.line(title=param_string(**default_initial_conditions, **default_kinetic_params), color=color(traj_full.columns))

fig_style(ax)
ax = traj_full_sensitivities[['S_k_on','E_k_on','ES_k_on','P_k_on']].plot.line()
ax = traj_full_sensitivities[['S_k_on','E_k_on','ES_k_on','P_k_on']].plot.line(logy=True)
ax = traj_full_sensitivities[['S_k_on','S_k_off','S_k_cat','S_k_uncat']].plot.line()

3.2.2. Adjoint Method
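
One standard formulation of the idea (a sketch, not worked through in this post): forward sensitivities cost one extra ODE solve per parameter, whereas the adjoint method recovers the full gradient from a single backwards solve. For a loss of the form $G(\theta) = \int_{t_0}^{T} g(u(t, \theta))\, dt$, define the adjoint state $\lambda(t)$ by

$$\frac{d\lambda}{dt} = -\left(\frac{\partial f}{\partial u}\right)^{\top} \lambda - \left(\frac{\partial g}{\partial u}\right)^{\top}, \qquad \lambda(T) = 0$$

integrated backwards from $T$ to $t_0$. Then

$$\frac{dG}{d\theta} = \int_{t_0}^{T} \lambda^{\top} \frac{\partial f}{\partial \theta}\, dt$$

(the boundary term vanishes because our initial conditions do not depend on $\theta$). The outline below follows this recipe.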

# 1. solve forwards: euler_full
# 2. solve the adjoint ODE backwards in time
# 3. evaluate the gradient integral
# (the adjoint integrator itself was left unimplemented in this draft):
# adjoint_euler_full(dt, steps, u=, data=default_traj_full.values, *kinetic_params)
def integrate(kinetic_params, dt=dt, initial_conditions=default_initial_conditions):
    # Integrate the full kinetics and return the predicted [P] at the observation
    # times. kinetic_params arrive in log10 space (as sampled by the MCMC in §3.1.3,
    # which calls this helper), hence the 10**val below.
    [(_, E_0), (_, S_0)] = initial_conditions.items()
    t_eval = observations.index[1:]
    t_span = (0, t_eval[-1])
    y0 = [S_0, E_0, 0, 0]
    kinetic_params = {name: 10**val for name, val in kinetic_params.items()}
    fun = lambda t,y: dy_full(t, y, E_0=E_0, S_0=S_0, **kinetic_params)

    try:
        sol = solve_ivp(fun, t_span, y0, t_eval=t_eval, first_step=dt, max_step=1e-2, method='LSODA')
        return sol.y[3] # Product
    except Exception:
        return np.zeros(10) # signal failure with an all-zero trajectory

4. Conclusions

The Michaelis-Menten/Briggs-Haldane approximation is not universally valid: when its assumptions are violated (§2.2, §2.3), parameters fit against the MM/BH rate law are spurious, which may be part of why the kinetic parameters reported in databases like BRENDA are not internally consistent (§1.2). Faithful inference requires fitting the right reaction scheme with the right kinetic expressions, which is what the approaches explored in this post allow.
